Goto

Collaborating Authors

 security team


On The Dangers of Poisoned LLMs In Security Automation

Karlsen, Patrick, Eilertsen, Even

arXiv.org Artificial Intelligence

Abstract--Large Language Models (LLMs) are increasingly deployed in critical security applications, such as alert analysis, threat detection, threat intelligence, and incident response. Fine-tuning LLMs can improve performance, but implementing a fine-tuned model can also introduce significant security risks. This paper investigates some of the risks introduced by "LLM poisoning," the intentional or unintentional introduction of malicious or biased data during model training. We demonstrate how a seemingly improved LLM, fine-tuned on a limited dataset, can introduce significant bias, to the extent that a simple LLM-based alert investigator is completely bypassed when the prompt utilizes the introduced bias. Using fine-tuned Llama3.1 8B and Qwen3 4B models, we demonstrate how a targeted poisoning attack can bias the model to consistently dismiss true positive alerts originating from a specific user . Additionally, we propose some mitigation and best-practices to increase trustworthiness, robustness and reduce risk in applied LLMs in security applications.


The Download: shoplifter-chasing drones, and Trump's TikTok deal

MIT Technology Review

Plus: Microsoft has stopped letting Israel use its technology for surveillance. Flock Safety, whose drones were once reserved for police departments, is now offering them for private-sector security, the company has announced. Potential customers include businesses trying to curb shoplifting. If the security team at a store sees shoplifters leave, they can activate a camera-equipped drone. "The drone follows the people. The people get in a car. You click a button and you track the vehicle with the drone, and the drone just follows the car," says Keith Kauffman, a former police chief who now directs Flock's drone program.


Shoplifters could soon be chased down by drones

MIT Technology Review

Flock Safety is pitching its police-style drone program to private businesses. It could bring aerial surveillance to shopping centers, warehouses, and hospitals. Flock Safety, whose drones were once reserved for police departments, is now offering them for private-sector security, the company announced today, with potential customers including including businesses intent on curbing shoplifting. Companies in the US can now place Flock's drone docking stations on their premises. If the company has a waiver from the Federal Aviation Administration to fly beyond visual line of sight (these are becoming easier to get), its security team can fly the drones within a certain radius, often a few miles. "Instead of a 911 call [that triggers the drone], it's an alarm call," says Keith Kauffman, a former police chief who now directs Flock's drone program.


AI-Driven IRM: Transforming insider risk management with adaptive scoring and LLM-based threat detection

Koli, Lokesh, Kalra, Shubham, Thakur, Rohan, Saifi, Anas, Singh, Karanpreet

arXiv.org Artificial Intelligence

Insider threats pose a significant challenge to organizational security, often evading traditional rule-based detection systems due to their subtlety and contextual nature. This paper presents an AI-powered Insider Risk Management (IRM) system that integrates behavioral analytics, dynamic risk scoring, and real-time policy enforcement to detect and mitigate insider threats with high accuracy and adaptability. We introduce a hybrid scoring mechanism - transitioning from the static PRISM model to an adaptive AI-based model utilizing an autoencoder neural network trained on expert-annotated user activity data. Through iterative feedback loops and continuous learning, the system reduces false positives by 59% and improves true positive detection rates by 30%, demonstrating substantial gains in detection precision. Additionally, the platform scales efficiently, processing up to 10 million log events daily with sub-300ms query latency, and supports automated enforcement actions for policy violations, reducing manual intervention. The IRM system's deployment resulted in a 47% reduction in incident response times, highlighting its operational impact. Future enhancements include integrating explainable AI, federated learning, graph-based anomaly detection, and alignment with Zero Trust principles to further elevate its adaptability, transparency, and compliance-readiness. This work establishes a scalable and proactive framework for mitigating emerging insider risks in both on-premises and hybrid environments.


To Patch or Not to Patch: Motivations, Challenges, and Implications for Cybersecurity

Nurse, Jason R. C.

arXiv.org Artificial Intelligence

As technology has become more embedded into our society, the security of modern-day systems is paramount. One topic which is constantly under discussion is that of patching, or more specifically, the installation of updates that remediate security vulnerabilities in software or hardware systems. This continued deliberation is motivated by complexities involved with patching; in particular, the various incentives and disincentives for organizations and their cybersecurity teams when deciding whether to patch. In this paper, we take a fresh look at the question of patching and critically explore why organizations and IT/security teams choose to patch or decide against it (either explicitly or due to inaction). We tackle this question by aggregating and synthesizing prominent research and industry literature on the incentives and disincentives for patching, specifically considering the human aspects in the context of these motives. Through this research, this study identifies key motivators such as organizational needs, the IT/security team's relationship with vendors, and legal and regulatory requirements placed on the business and its staff. There are also numerous significant reasons discovered for why the decision is taken not to patch, including limited resources (e.g., person-power), challenges with manual patch management tasks, human error, bad patches, unreliable patch management tools, and the perception that related vulnerabilities would not be exploited. These disincentives, in combination with the motivators above, highlight the difficult balance that organizations and their security teams need to maintain on a daily basis. Finally, we conclude by discussing implications of these findings and important future considerations.


Robot security guard dubbed 'secret agent man' deployed to patrol Ohio sidewalks

FOX News

Richtech Robotics spokesman Timothy Tanksley and Richtech Robotics COO Phil Zheng joined'Fox & Friends Weekend' to show how his company's robot barista can serve coffee on FOX Square. A shopping mall in Ohio is integrating cutting-edge AI technology into its safety team in the form of a 400-pound robot security guard. "He's our secret agent man," Stacie Schmidt, vice president of marketing at Crocker Park told local media of the new security robot. Crocker Park is an open-air shopping mall located in Westlake - a suburban town located about 15 miles outside of Cleveland - which sees nearly 10 million visitors a year and is home to 1,000 residents in luxury apartments. This month, leaders of Crocker Park introduced SAM, a 420-pound, 5'1" autonomous robot that will patrol sidewalks and act as a "watchdog," according to a press release provided to Fox News Digital. "Our priority has always been to provide a safe and secure environment for everyone who visits our center, and the Knightscope robot will play a crucial role in enhancing our existing security measures," Sean Flanigan, vice president of security at Stark Enterprises, which owns Crocker Park. SAM, which was built by California-based robotics company Knightscope, uses 360-degree video streaming and recording video capabilities to monitor areas and alert authorities to any potential issues. The robot can work 24 hours a day, rain or shine. "[SAM's] AI algorithms enable it to detect anomalies and issue alerts to the on-site security team in real-time.


With Security Copilot, Microsoft Brings The Power Of AI To Cyberdefense - Liwaiwai

#artificialintelligence

Security Copilot will combine Microsoft's vast threat intelligence footprint with industry-leading expertise to augment the work of security professionals through an easy-to-use AI assistant. "Today the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers," said Vasu Jakkal, corporate vice president, Microsoft Security. "With Security Copilot, we are shifting the balance of power into our favor. Security Copilot is the first and only generative AI security product enabling defenders to move at the speed and scale of AI." Security Copilot is designed to work seamlessly with security teams, empowering defenders to see what is happening in their environment, learn from existing intelligence, correlate threat activity, and make more informed, efficient decisions at machine speed.


With Security Copilot, Microsoft brings the power of AI to cyberdefense - Stories

#artificialintelligence

March 28, 2023 -- Microsoft Corp. on Tuesday announced it is bringing the next generation of AI to cybersecurity with the launch of Microsoft Security Copilot, giving defenders a much-needed tool to quickly detect and respond to threats and better understand the threat landscape overall. Security Copilot will combine Microsoft's vast threat intelligence footprint with industry-leading expertise to augment the work of security professionals through an easy-to-use AI assistant. "Today the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers," said Vasu Jakkal, corporate vice president, Microsoft Security. "With Security Copilot, we are shifting the balance of power into our favor. Security Copilot is the first and only generative AI security product enabling defenders to move at the speed and scale of AI." Security Copilot is designed to work seamlessly with security teams, empowering defenders to see what is happening in their environment, learn from existing intelligence, correlate threat activity, and make more informed, efficient decisions at machine speed.


What Is Extended Detection and Response (XDR)? - Big Data Analytics News

#artificialintelligence

XDR, or Extended Detection and Response, is an emerging security technology that is rapidly gaining popularity in the cybersecurity industry. It is a comprehensive security solution that offers a unified approach to threat detection, investigation, and response across multiple endpoints, networks, and cloud environments. In today's digital age, cyber threats are becoming increasingly sophisticated and diverse, making it difficult for organizations to detect and respond to them in a timely and effective manner. Traditional security solutions, such as antivirus software, firewalls, and intrusion detection systems, are no longer sufficient to protect against the complex and evolving threat landscape. It collects and correlates data from various sources, including endpoints, network devices, and cloud platforms, and applies advanced analytics and machine learning algorithms to identify suspicious activity and potential threats.


Gaurav Banga's Balbix Is Using AI to Automate Cybersecurity for the World's Leading Companies

#artificialintelligence

Gaurav Banga is at the helm of his own company for the third time with cybersecurity specialist Balbix, having previously founded endpoint security software maker Bromium and mobile instant messaging app PDAapps. Gaurav Banga never intended to start a series of cutting-edge high-tech companies. His latest is Balbix, based in San Jose, California, one of the world's most advanced cybersecurity platforms. "I'm an accidental entrepreneur," he told Startup Savant. "While I was earning my Ph.D. in computer science at Rice University, I had intended to be an academic. But when I started applying for faculty positions, I realized I would be teaching undergraduates without knowing much about their future industry careers. So, I took a sabbatical for a year to work in industry, and I had so much fun, I never went back."